19 research outputs found

    Extensive evaluation of programming models and ISAs impact on multicore soft error reliability

    Get PDF
    To take advantage of the performance enhancements provided by multicore processors, new instruction set architectures (ISAs) and parallel programming libraries have been investigated across multiple industrial segments. This work investigates the impact of parallelization libraries and distinct ISAs on the soft error reliability of two multicore ARM processor models (i.e., Cortex-A9 and Cortex-A72), running the Linux kernel and benchmarks with up to 87 billion instructions. An extensive soft error evaluation with more than 1.2 million simulation hours, considering the ARMv7 and ARMv8 ISAs and the NAS Parallel Benchmark (NPB) suite, is presented.
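    The single event upset model behind such campaigns is simple to state: one bit of architectural state is inverted at a random point of execution. The sketch below flips one bit in a toy ARM register file model; all names are illustrative, and the paper's campaigns of course drive a full-system simulator rather than this stand-alone toy.

    ```c
    /* Minimal single-bit-flip fault model over a simulated ARM register
     * file. Illustrative sketch only, not the paper's tooling. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define NUM_REGS 16  /* ARMv7 general-purpose registers r0-r15 */

    static uint32_t regfile[NUM_REGS];

    /* Flip one uniformly chosen bit in one uniformly chosen register. */
    static void inject_seu(void)
    {
        int reg = rand() % NUM_REGS;
        int bit = rand() % 32;
        regfile[reg] ^= (uint32_t)1u << bit;
        printf("SEU injected: r%d bit %d flipped -> 0x%08x\n",
               reg, bit, regfile[reg]);
    }

    int main(void)
    {
        srand((unsigned)time(NULL));
        for (int i = 0; i < NUM_REGS; i++)
            regfile[i] = (uint32_t)i * 0x11111111u;  /* dummy state */
        inject_seu();
        return 0;
    }
    ```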

    Using machine learning techniques to evaluate multicore soft error reliability

    Get PDF
    Virtual platform frameworks have been extended to allow earlier soft error analysis of more realistic multicore systems (i.e., real software stacks, state-of-the-art ISAs). The high observability and simulation performance of the underlying frameworks make it possible to generate and collect more error/failure-related data, considering complex software stack configurations, in a reasonable time. When dealing with sizeable failure-related data sets obtained from multiple fault campaigns, it is essential to filter out parameters (i.e., features) without a direct relationship to the system's soft error behavior. In this regard, this paper proposes the use of supervised and unsupervised machine learning techniques to eliminate non-relevant information and to identify the correlation between fault injection results and application and platform characteristics. This novel approach provides engineers with appropriate means to investigate new and more efficient fault mitigation techniques. The underlying approach is validated with an extensive data set gathered from more than 1.2 million fault injections, comprising several benchmarks, a Linux OS, and parallelization libraries (e.g., MPI, OpenMP), as well as through a realistic automotive case study.
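    As a flavor of the correlation step, the sketch below computes a Pearson coefficient between one candidate feature and the observed failure rate across campaigns; a feature with near-zero correlation is a candidate for elimination. The data values are fabricated placeholders, and the paper itself applies full supervised/unsupervised pipelines rather than a single coefficient.

    ```c
    /* Pearson correlation between a candidate feature and observed
     * failure rates across fault campaigns. Data is made up. */
    #include <math.h>
    #include <stdio.h>

    static double pearson(const double *x, const double *y, int n)
    {
        double sx = 0, sy = 0, sxy = 0, sxx = 0, syy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i]; sy += y[i];
            sxy += x[i] * y[i];
            sxx += x[i] * x[i]; syy += y[i] * y[i];
        }
        double cov = sxy - sx * sy / n;
        double vx  = sxx - sx * sx / n;
        double vy  = syy - sy * sy / n;
        return cov / sqrt(vx * vy);
    }

    int main(void)
    {
        /* e.g., per-benchmark memory footprint vs. failure rate */
        double feature[]  = {1.2, 3.4, 2.1, 5.6, 4.3};
        double failrate[] = {0.02, 0.08, 0.04, 0.12, 0.09};
        printf("Pearson r = %.3f\n", pearson(feature, failrate, 5));
        /* |r| close to 0 flags the feature as non-relevant */
        return 0;
    }
    ```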

    Exploiting memory allocations in clusterized many-core architectures

    Get PDF
    Power efficiency has become the most important requirement for future embedded systems. Modern designs, like those released in mobile devices, show that clusterization is the way to improve energy efficiency. However, such architectures are still limited by the memory subsystem (i.e., memory latency problems). This work investigates an alternative approach that exploits on-chip data locality to a large extent, through distributed shared memory systems that permit efficient reuse of on-chip mapped data in clusterized many-core architectures. First, this work reviews the current literature on memory allocations and explores the limitations of cluster-based many-core architectures. Then, several memory allocations are introduced and benchmarked in terms of scalability, performance, and energy against the conventional centralized shared memory solution, to reveal which memory allocation is the most appropriate for future mobile architectures. Our results show that distributed shared memory allocations bring performance gains and opportunities to reduce energy consumption.
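    The centralized-versus-distributed trade-off can be caricatured on a commodity SMP with pthreads: every thread contending for one shared buffer versus each thread working in a private buffer, the latter standing in for per-cluster, locally mapped memory. This is only a sketch of the idea under those assumptions; the paper's experiments target clusterized many-core silicon, not pthreads.

    ```c
    /* Caricature of centralized vs. distributed shared memory
     * allocation: one buffer shared by all threads vs. one private
     * buffer per thread (a stand-in for per-cluster memory). */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define NTHREADS 4
    #define NITEMS   (1 << 20)

    static long shared_buf[NITEMS];          /* centralized allocation */

    struct arg { long *buf; int id; };

    static void *worker(void *p)
    {
        struct arg *a = p;
        /* strided walk so threads on the shared buffer contend */
        for (long i = a->id; i < NITEMS; i += NTHREADS)
            a->buf[i] += i;
        return NULL;
    }

    static void run(int distributed)
    {
        pthread_t t[NTHREADS];
        struct arg args[NTHREADS];
        for (int i = 0; i < NTHREADS; i++) {
            args[i].id  = i;
            args[i].buf = distributed ? calloc(NITEMS, sizeof(long))
                                      : shared_buf;
            pthread_create(&t[i], NULL, worker, &args[i]);
        }
        for (int i = 0; i < NTHREADS; i++) {
            pthread_join(t[i], NULL);
            if (distributed) free(args[i].buf);
        }
        printf("%s run finished\n",
               distributed ? "distributed" : "centralized");
    }

    int main(void)
    {
        run(0);  /* centralized: all threads share one buffer */
        run(1);  /* distributed: private buffer per thread */
        return 0;
    }
    ```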

    Real world lessons that can assist construction organisations in implementing BIM to improve the OSH processes

    Get PDF
    Changing the way OSH management is performed with BIM is relevant. Several authors argue that real-world cases need to be studied, as there are few examples of studies covering BIM implementation for OSH. The UK's introductory standard PAS 1192-6 indicates some of the requisites and approaches to implementing BIM for OSH, and lessons from projects that have already implemented it provide valuable input. This paper explores stakeholders' perceptions of the benefits and barriers of adopting BIM for OSH purposes, using examples from a complex large project (the Thames Tideway Tunnel). The methodology adopted was a survey of 39 project participants. The study focused on the following areas: collaboration, risk assessment, training and awareness, inspection of workplaces, work accidents, budget control, error detection, and the liaison between logistics and productivity. The implementation of this new BIM-based approach to construction OSH management reveals a very positive vision of improved OSH management, namely in the areas of risk assessment and training, along with optimization of time and costs and a better liaison between OSH and production, with increased production efficiency. This can potentially lead to a paradigm shift in OSH management in large projects.

    The impact of soft errors in memory units of edge devices executing convolutional neural networks

    No full text
    Driven by the success of machine learning algorithms for recognizing and identifying objects, there are significant efforts to exploit convolutional neural networks (CNNs) in edge devices. The growing adoption of CNNs in safety-critical embedded systems (e.g., autonomous vehicles) increases the demand for safe and reliable models. In this sense, this brief investigates the soft error reliability of two CNN inference models, considering single event upsets (SEUs) occurring in register files, RAM, and Flash memory sections. The results show that the incidence of SEUs in Flash memory sections tends to lead to more critical faults than those resulting from the occurrence of bit-flips in RAM sections and register files.
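    The fault model reduces to flipping one stored bit of a weight, and for IEEE-754 weights the impact depends heavily on which bit is hit: a mantissa-bit flip perturbs the value slightly, while an exponent-bit flip can blow a small weight up by orders of magnitude. An illustrative sketch, unrelated to the authors' actual tooling:

    ```c
    /* Single event upset in a stored CNN weight: flip one bit of an
     * IEEE-754 float and observe the corruption. Illustrative only. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static float flip_bit(float w, int bit)
    {
        uint32_t raw;
        memcpy(&raw, &w, sizeof raw);   /* well-defined type pun */
        raw ^= (uint32_t)1u << bit;
        memcpy(&w, &raw, sizeof raw);
        return w;
    }

    int main(void)
    {
        float weight = 0.037f;          /* a typical small conv weight */
        /* mantissa-bit flip: small perturbation */
        printf("bit  3: %.6f -> %.6f\n", weight, flip_bit(weight, 3));
        /* exponent-bit flip: the weight explodes in magnitude */
        printf("bit 30: %.6f -> %g\n",  weight, flip_bit(weight, 30));
        return 0;
    }
    ```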

    A Fast and Scalable Fault Injection Framework to Evaluate Multi/Many-core Soft Error Reliability

    No full text
    Increasing chip power densities, allied to continuous technology shrinking, are making emerging multiprocessor embedded systems more vulnerable to soft errors. Due to the high cost and design time inherent in board-based fault injection approaches, more appropriate and efficient simulation-based fault injection frameworks become crucial to guarantee adequate design exploration support at early design phases. In this scenario, this paper proposes a fast and flexible fault injection framework, called OVPSim-FIM, which supports parallel simulation to speed up the fault injection process. Aiming at validating OVPSim-FIM, several fault injection campaigns were performed on ARM processors, considering a market-leading RTOS and benchmarks with up to 10 billion object code instructions. Results show that OVPSim-FIM injects faults at speeds of up to 10,000 MIPS, depending on the processor and the benchmark profile, making it possible to identify errors and exceptions according to different criteria and classifications.
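    Conceptually, the campaign such a framework automates is a loop: run the workload with one injected fault, compare against a golden run, and bin the outcome. The toy skeleton below shows that structure; every function and outcome distribution in it is a hypothetical placeholder, not the OVPSim-FIM API (which the abstract does not describe).

    ```c
    /* Skeleton of a fault injection campaign: one fault per run,
     * outcomes binned against a golden reference. Placeholders only. */
    #include <stdio.h>
    #include <stdlib.h>

    enum outcome { MASKED, SDC, CRASH, HANG };

    /* placeholder: run the workload with a bit-flip at a random time */
    static enum outcome run_with_fault(unsigned seed)
    {
        srand(seed);
        int r = rand() % 100;           /* fabricated distribution */
        if (r < 80) return MASKED;      /* no visible effect          */
        if (r < 92) return SDC;         /* silent data corruption     */
        if (r < 98) return CRASH;       /* exception / abort          */
        return HANG;                    /* watchdog timeout           */
    }

    int main(void)
    {
        int bins[4] = {0};
        const int campaigns = 10000;
        for (int i = 0; i < campaigns; i++)
            bins[run_with_fault((unsigned)i)]++;
        printf("masked %d  SDC %d  crash %d  hang %d\n",
               bins[MASKED], bins[SDC], bins[CRASH], bins[HANG]);
        return 0;
    }
    ```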

    Applying lightweight soft error mitigation techniques to embedded mixed precision deep neural networks

    No full text
    Deep neural networks (DNNs) are being incorporated into resource-constrained IoT devices, which typically rely on a reduced memory footprint and low-performance processors. While DNNs' precision and performance can vary and are essential, it is also vital to deploy trained models that provide high reliability at low cost. To achieve an unyielding reliability and safety level, it is imperative to provide electronic computing systems with appropriate mechanisms to tackle soft errors. This paper, therefore, investigates the relationship between soft errors and model accuracy. In this regard, an extensive soft error assessment of the MobileNet model is conducted, considering precision bitwidth variations (2, 4, and 8 bits) running on an Arm Cortex-M processor. In addition, this work promotes the use of a register allocation technique (RAT) that allocates the critical DNN function/layer to a pool of specific general-purpose processor registers. Results obtained from more than 4.5 million fault injections show that RAT gives the best relative performance, memory utilization, and soft error reliability trade-offs w.r.t. a more traditional replication-based approach. Results also show that the MobileNet soft error reliability varies depending on the precision bitwidth of its convolutional layers.
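    The RAT idea, constraining a critical computation to a fixed pool of registers so it is never spilled to soft-error-prone memory, can be approximated with GCC's register variable extension on Arm. Whether this matches the authors' actual compiler flow is not stated in the abstract, so treat the snippet as an assumption-laden sketch.

    ```c
    /* Sketch of a register allocation technique (RAT): pin a critical
     * DNN-layer accumulator to one fixed register so the compiler never
     * spills it to memory. Uses GCC's global register variable
     * extension for Arm; an approximation of the idea, not the
     * paper's tool flow. */
    #include <stdint.h>
    #include <stdio.h>

    /* reserve r8 (callee-saved under AAPCS) for the critical value */
    register int32_t critical_acc asm("r8");

    static int32_t dot_q8(const int8_t *w, const int8_t *x, int n)
    {
        critical_acc = 0;
        for (int i = 0; i < n; i++)
            critical_acc += (int32_t)w[i] * x[i];   /* stays in r8 */
        return critical_acc;
    }

    int main(void)
    {
        int8_t w[4] = {1, -2, 3, -4};
        int8_t x[4] = {5, 6, 7, 8};
        printf("dot = %ld\n", (long)dot_q8(w, x, 4));
        return 0;
    }
    ```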

    An extensive soft error reliability analysis of a real autonomous vehicle software stack

    No full text
    Automotive systems are integrating artificial intelligence and complex software stacks, aiming to interpret the real world, make decisions, and perform actions without human input. The occurrence of soft errors in such systems can lead to wrong decisions, which might ultimately result in loss of life. This brief focuses on the soft error susceptibility assessment of a real automotive application running on top of unmodified Linux kernels, considering two commercially available processors and three cross-compilers. Results collected from more than 29 thousand simulation hours show that the occurrence of faults in critical functions may cause 2.16× more failures in the system.

    SOFIA: An automated framework for early soft error assessment, identification, and mitigation

    No full text
    The occurrence of radiation-induced soft errors in electronic computing systems can either affect non-essential system functionalities or violate safety-critical conditions, which might incur life-threatening situations. To reach high safety standard levels, reliability engineers must be able to explore and identify efficient mitigation solutions that reduce the occurrence of soft errors in the initial design cycle. This paper presents SOFIA, a framework that integrates: (i) a set of fault injection techniques that enable bespoke inspections, (ii) machine learning methods to correlate soft error results and system architecture parameters, and (iii) mitigation techniques, including full and partial triple modular redundancy (TMR) as well as a register allocation technique (RAT), which allocates the critical code (e.g., an application's function, a machine learning layer) to a pool of specific processor registers. The proposed framework and novel variations of the RAT are validated through more than 1739k fault injections, considering a real Linux kernel, benchmarks from different domains, and a multi-core Arm processor.
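    Of the mitigations listed, TMR is the most self-describing: execute the critical computation three times and majority-vote the results, so any single corrupted copy is out-voted. A minimal software TMR sketch; illustrative only, not the code SOFIA generates.

    ```c
    /* Minimal software triple modular redundancy (TMR): run the
     * critical computation three times and majority-vote the results.
     * Illustrative sketch, not SOFIA's generated code. */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t critical_fn(uint32_t x) { return x * x + 1u; }

    static uint32_t tmr_vote(uint32_t a, uint32_t b, uint32_t c)
    {
        /* bitwise majority: a bit is set iff set in >= 2 inputs */
        return (a & b) | (a & c) | (b & c);
    }

    int main(void)
    {
        uint32_t r1 = critical_fn(7);
        uint32_t r2 = critical_fn(7);
        uint32_t r3 = critical_fn(7) ^ 0x4u;  /* simulated bit-flip */
        printf("voted = %u (copies: %u %u %u)\n",
               tmr_vote(r1, r2, r3), r1, r2, r3);
        return 0;
    }
    ```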

    Impact of radiation-induced soft error on embedded cryptography algorithms

    No full text
    With the advance of autonomous systems, security is becoming the most crucial feature in different domains, highlighting the need for protection against potential attacks. These attacks can be mitigated using embedded cryptography algorithms, which differ in performance, area, and reliability. This paper compares hardware implementations of the eXtended Tiny Encryption Algorithm (XTEA) and the Advanced Encryption Standard (AES). Results show that the XTEA implementation gives the best relative performance (e.g., throughput, power), area, and soft error reliability trade-offs.
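    For context on why XTEA's footprint is so small, its entire encryption routine is a short Feistel loop, reproduced below from the public-domain reference algorithm by Wheeler and Needham (the key and plaintext in main are arbitrary demo values; the paper evaluates hardware implementations, not this software form).

    ```c
    /* XTEA block cipher, public-domain reference algorithm (Wheeler &
     * Needham). Its tiny footprint drives the XTEA/AES area and
     * reliability trade-off discussed above. */
    #include <stdint.h>
    #include <stdio.h>

    static void xtea_encipher(unsigned rounds, uint32_t v[2],
                              const uint32_t key[4])
    {
        uint32_t v0 = v[0], v1 = v[1], sum = 0;
        const uint32_t delta = 0x9E3779B9u;
        for (unsigned i = 0; i < rounds; i++) {
            v0 += (((v1 << 4) ^ (v1 >> 5)) + v1) ^ (sum + key[sum & 3]);
            sum += delta;
            v1 += (((v0 << 4) ^ (v0 >> 5)) + v0)
                  ^ (sum + key[(sum >> 11) & 3]);
        }
        v[0] = v0; v[1] = v1;
    }

    int main(void)
    {
        uint32_t key[4]   = {0x01234567u, 0x89ABCDEFu,
                             0xFEDCBA98u, 0x76543210u};
        uint32_t block[2] = {0xDEADBEEFu, 0x0BADF00Du};
        xtea_encipher(32, block, key);  /* 32 cycles = 64 Feistel rounds */
        printf("ciphertext: %08x %08x\n", block[0], block[1]);
        return 0;
    }
    ```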